A couple of enhancements for MPRAGE #3
base: main
Conversation
Thanks a lot :D As a general comment, I was thinking of MPRAGEModel as a memory-efficient model that calculates the approximate contrast at TI without having to store the signal intensity for every k-space line (while still accounting for the effect of the multiple RF pulses when reaching steady state). If we want to store these signals, we have MPnRAGEModel.
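To illustrate the trade-off between the two models, here is a minimal sketch (the function name and defaults are hypothetical, not the torchsim API): a toy MPRAGE shot that evolves only the longitudinal magnetization after inversion, and can return either just the contrast at the central k-space line or the full per-line signal train.

```python
import numpy as np

def mprage_shot(T1, TR=8e-3, alpha=8.0, n_lines=192, inv_eff=1.0):
    """Toy MPRAGE shot: Mz recursion after inversion, perfect spoiling.

    Hypothetical example, not the torchsim implementation. Returns
    (center_signal, all_signals): the contrast at the central k-space
    line vs. the full per-line signal train (what MPnRAGEModel stores).
    """
    E1 = np.exp(-TR / T1)
    ca = np.cos(np.deg2rad(alpha))
    sa = np.sin(np.deg2rad(alpha))
    Mz = -inv_eff  # adiabatic inversion of unit equilibrium magnetization
    signals = np.empty(n_lines)
    for p in range(n_lines):
        signals[p] = Mz * sa          # transverse signal at this readout
        Mz = Mz * ca                  # longitudinal loss from the RF pulse
        Mz = 1.0 + (Mz - 1.0) * E1    # T1 relaxation over one TR
    return signals[n_lines // 2], signals

center, train = mprage_shot(T1=1.0)
```

A memory-efficient model only needs to track `Mz` (one scalar per voxel) and read out `center`, while the full `train` grows with the number of k-space lines.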
@@ -140,16 +140,16 @@ def _engine(
states = epg.adiabatic_inversion(states, inv_efficiency)
states = epg.longitudinal_relaxation(states, E1inv, rE1inv)
states = epg.spoil(states)

signal = []
If we want to store magnetization for each spgr shot (e.g., to study signal modulation in k-space or for contrast-resolved reconstruction), it is better to use torchsim.models.MPnRAGEModel.
src/torchsim/models/mprage.py (outdated)
# Scan loop
for p in range(nshots_bef):

    # Apply RF pulse
    states = epg.rf_pulse(states, RF)

    # Evolve
    states = epg.longitudinal_relaxation(states, E1, rE1)
Without multiple repetitions, everything that happens after ADC should be irrelevant.
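This can be checked directly with a toy model (illustrative names, not the torchsim code): with a single repetition, any relaxation or spoiling applied after the last ADC event cannot change the recorded signal.

```python
import numpy as np

def shot(T1=1.0, TR=8e-3, alpha=8.0, n=64, trailing_ops=True):
    """Toy single-repetition readout train (hypothetical helper).

    `trailing_ops` toggles the extra evolution applied AFTER the last
    readout; with a single repetition it cannot affect the output.
    """
    E1 = np.exp(-TR / T1)
    ca, sa = np.cos(np.deg2rad(alpha)), np.sin(np.deg2rad(alpha))
    Mz, sig = 1.0, []
    for _ in range(n):
        sig.append(Mz * sa)              # record signal at ADC
        Mz = 1.0 + (Mz * ca - 1.0) * E1  # evolve toward the next pulse
    if trailing_ops:
        Mz = 1.0 + (Mz * ca - 1.0) * E1  # post-ADC ops, never read out
    return np.asarray(sig)

assert np.array_equal(shot(trailing_ops=True), shot(trailing_ops=False))
```

The post-ADC state would only matter if the sequence looped back for another repetition.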
states = epg.spoil(states)

# Record signal
return M0 * 1j * epg.get_signal(states)
return torch.stack(signal)
Maybe move return M0 * 1j * torch.stack(signal) here?
Testing against Bloch simulation would greatly improve everything! I am moving this to Discussion #5
I think the code also needs some tests against a Bloch simulation ground truth; do we have any?
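As a concrete starting point for such a test (names here are illustrative, not part of the torchsim test suite): the steady state of a spoiled gradient-echo train, simulated by time-stepping the longitudinal magnetization with ideal spoiling, must reproduce the closed-form Ernst formula. A full Bloch test would additionally sum over isochromats with gradient spoiling, but this already pins down the T1/flip-angle behavior.

```python
import numpy as np

def spgr_bloch_steady_state(T1, TR, alpha_deg, n_pulses=2000):
    """Time-stepped simulation of a perfectly spoiled GRE train."""
    E1 = np.exp(-TR / T1)
    ca, sa = np.cos(np.deg2rad(alpha_deg)), np.sin(np.deg2rad(alpha_deg))
    Mz = 1.0
    for _ in range(n_pulses):
        Mxy = Mz * sa                # signal right after the pulse
        Mz = Mz * ca                 # remaining longitudinal component
        Mz = 1.0 + (Mz - 1.0) * E1   # T1 recovery over TR; spoiling kills Mxy
    return Mxy

def spgr_ernst(T1, TR, alpha_deg):
    """Closed-form Ernst steady-state signal."""
    E1 = np.exp(-TR / T1)
    a = np.deg2rad(alpha_deg)
    return np.sin(a) * (1 - E1) / (1 - np.cos(a) * E1)

sim = spgr_bloch_steady_state(T1=1.0, TR=10e-3, alpha_deg=10.0)
ref = spgr_ernst(T1=1.0, TR=10e-3, alpha_deg=10.0)
assert abs(sim - ref) < 1e-9
```

The same pattern extends to MPRAGE: step the simulation through inversion plus the shot loop and compare against an independent analytical or Bloch reference.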